Dynamic Programming and Bellman’s Principle

Author

  • Piermarco Cannarsa

Abstract

In control theory one is given a system, usually described by differential equations, that can be influenced by an external action. In optimal control problems, this action is to be exercised so as to minimize a given cost functional. The cost functionals of interest may be of very different natures: in general, they may depend on the state of the system, on the control, and possibly on the system's history over a given time interval. A control is optimal if the resulting evolution of the system minimizes the cost.
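The optimality notion in the abstract can be made concrete with a small discrete-time example. The sketch below is my own illustration, not taken from the article: the dynamics, state grid, and quadratic stage cost are all assumptions. It applies Bellman's principle backward in time, computing the value function at stage k from the value function at stage k+1.

```python
# Sketch of backward dynamic programming for a finite-horizon problem
# (illustrative assumptions: dynamics x_{k+1} = x + u on a small integer
# grid, stage cost x**2 + u**2, zero-order terminal cost x**2).
STATES = range(-3, 4)    # admissible states
CONTROLS = (-1, 0, 1)    # admissible controls
HORIZON = 4              # number of stages

def solve():
    # Terminal cost V_N(x) = x**2
    V = {x: x * x for x in STATES}
    policies = []
    for _ in range(HORIZON):
        newV, mu = {}, {}
        for x in STATES:
            # Bellman's principle: the tail of an optimal trajectory is
            # itself optimal, so V_k needs only V_{k+1}, not the full future.
            candidates = [(x * x + u * u + V[x + u], u)
                          for u in CONTROLS if x + u in V]
            newV[x], mu[x] = min(candidates)
        V, policies = newV, [mu] + policies
    return V, policies
```

Here `solve()` returns the stage-0 value function and the sequence of feedback policies; at the origin the optimal control is to do nothing, so the value there is zero.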


Similar articles

Tail optimality and preferences consistency for intertemporal optimization problems

When an intertemporal optimization problem over a time interval [t0, T] is linear and can be solved via dynamic programming, Bellman's principle holds, and the optimal control map has the desirable feature of being tail-optimal in the right queue; moreover, the optimizer keeps solving the same problem at any time t with renovated conditions: we will say that he is preferences-consiste...


Integrating Pareto Optimization into Dynamic Programming

Pareto optimization combines independent objectives by computing the Pareto front of the search space, yielding a set of optima where none scores better on all objectives than any other. Recently, it was shown that Pareto optimization seamlessly integrates with algebraic dynamic programming: when scoring schemes A and B can correctly evaluate the search space via dynamic programming, then so ca...
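The Pareto front mentioned in this abstract can be computed with a short sorted sweep. The sketch below is my own (the function name and the choice of two minimized objectives are assumptions, not the paper's algebraic framework): it keeps exactly the score pairs on which no other pair is better in both coordinates.

```python
def pareto_front(points):
    """Return the Pareto-optimal subset of (a, b) score pairs,
    where smaller is better in both coordinates (illustrative sketch)."""
    front = []
    best_b = float("inf")
    # Sort by the first objective; a point survives only if it strictly
    # improves the best second objective seen so far.
    for a, b in sorted(points):
        if b < best_b:          # not dominated by any kept point
            front.append((a, b))
            best_b = b
    return front
```

For example, `pareto_front([(1, 5), (2, 3), (3, 4), (0, 9)])` keeps `(0, 9)`, `(1, 5)`, and `(2, 3)` and discards `(3, 4)`, which `(2, 3)` dominates.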


Learning for stochastic dynamic programming

We present experimental results about learning function values (i.e. Bellman values) in stochastic dynamic programming (SDP). All results come from openDP (opendp.sourceforge.net), freely available source code, and therefore can be reproduced. The goal is an independent comparison of learning methods in the framework of SDP. 1 What is stochastic dynamic programming (SDP)? We here very roughl...
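To fix ideas about the SDP framework this abstract compares learning methods in, here is a minimal backward value iteration over a stochastic inventory-style model. The model, cost coefficients, and names are my assumptions for illustration and are unrelated to openDP; the point is only that the Bellman value averages over the random disturbance.

```python
# Illustrative stochastic DP: state = stock in 0..4, control = order in 0..2,
# random demand d in {0, 1} with equal probability (assumed model).
# Bellman equation: V_k(x) = min_u E_d[ cost(x, u, d) + V_{k+1}(next(x, u, d)) ]
STATES = range(5)
ORDERS = range(3)
DEMANDS = ((0, 0.5), (1, 0.5))
HORIZON = 3

def step(x, u, d):
    nxt = min(max(x + u - d, 0), 4)                    # clamp stock to the grid
    cost = u + 2 * max(d - x - u, 0) + 0.5 * nxt       # order + shortage + holding
    return cost, nxt

def value_iteration():
    V = {x: 0.0 for x in STATES}                       # zero terminal cost
    for _ in range(HORIZON):
        V = {x: min(sum(p * (step(x, u, d)[0] + step(x, u, d)[1] * 0 + V[step(x, u, d)[1]])
                        for d, p in DEMANDS)
                    for u in ORDERS)
             for x in STATES}
    return V
```

Learning methods of the kind the paper benchmarks would replace the exact table `V` with a fitted approximation of these Bellman values.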


Sequence Similarity and Dynamic Programming

Dynamic programming is a classic programming technique, applicable in a wide variety of domains, like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing with ambiguous grammars, or biosequence analysis. Yet, no methodology is available for designing such algorithms. The matrix recurrences that typically describe a dynamic programming a...
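A canonical instance of the matrix recurrences this abstract refers to, applied to biosequence analysis, is the Levenshtein edit distance. The sketch below is the standard textbook formulation, not code from the paper: entry dp[i][j] holds the distance between the first i characters of one string and the first j of the other.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic DP matrix recurrence
    (standard formulation, shown for illustration)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i            # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j            # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = dp[i - 1][j - 1] + (a[i - 1] != b[j - 1])  # match/substitute
            dp[i][j] = min(sub,
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    return dp[m][n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion).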


Algebraic Dynamic Programming

Dynamic programming is a classic programming technique, applicable in a wide variety of domains, like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing with ambiguous grammars, or biosequence analysis. Yet, no methodology is available for designing such algorithms. The matrix recurrences that typically describe a dynamic programming a...



Publication date: 2011